Access to bitcoins is controlled by a pair of cryptographic keys, one private and one public. The user holds the private key in a software "wallet," and the currency itself cannot be spent until the private key signs a transaction that is verified against the public ledger. Without the private key, therefore, the user cannot access the funds, and a hard drive failure or other sudden event can render the bitcoins permanently unrecoverable.

"If you lose your private key, your bitcoins are completely inaccessible," said Ben Carmitchel, President of DataRecovery.com. "At that point, the only option is to pay for professional data recovery or to cut your losses."

The user's private key resides in the wallet.dat file, which can be backed up to a hard drive, USB stick, or any other type of storage device. A single backup, however, will not protect the user from every data-loss scenario.

"Because of the way that Bitcoin transactions work, users need to create regular backups of their wallets," said Carmitchel. "Otherwise, some bitcoins could become permanently inaccessible, even if a user has an older backup. This is particularly important for people who use bitcoins on an everyday basis and for people who handle a lot of transactions."

Carmitchel recommends regular encrypted backups to protect against data loss. Up-to-date encryption helps to prevent hacking, and DataRecovery.com can recover encrypted information without putting the data at risk.
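
As one illustration of that advice, the sketch below makes a timestamped, encrypted copy of wallet.dat. It is a minimal example, not DataRecovery.com's procedure: the wallet path, the backup directory, and the use of the Python "cryptography" package's Fernet recipe are all assumptions chosen for the illustration.

```python
# Minimal sketch: timestamped, encrypted backup of a Bitcoin wallet.dat file.
# Assumes the "cryptography" package (pip install cryptography); paths are
# illustrative -- adjust WALLET_PATH and BACKUP_DIR for your own system.
from pathlib import Path
from datetime import datetime, timezone
from cryptography.fernet import Fernet

WALLET_PATH = Path.home() / ".bitcoin" / "wallet.dat"   # typical location, may differ
BACKUP_DIR = Path("/mnt/usb/wallet-backups")            # e.g. a USB stick

def backup_wallet(key: bytes) -> Path:
    """Encrypt wallet.dat with a symmetric key and write a timestamped copy."""
    token = Fernet(key).encrypt(WALLET_PATH.read_bytes())
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_DIR / f"wallet-{stamp}.dat.enc"
    dest.write_bytes(token)
    return dest

if __name__ == "__main__":
    # Generate the key once, store it somewhere safe (NOT next to the backups),
    # and reuse it for every backup so older copies remain decryptable.
    key = Fernet.generate_key()
    print("backup written to", backup_wallet(key))
```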

"Data recovery should be a last resort, particularly for anyone who invests heavily in Bitcoin," Carmitchel said. "There's no substitute for a good backup strategy, but we're always ready to help Bitcoin users when a server or a hard drive fails."
 
If you’ve built something yourself rather than buying it, like a bookshelf or a birdhouse, you know the satisfaction of shaping something to your needs. And as long as nothing goes wrong, you’re in good shape. But if it breaks, you can’t return it to the store for an exchange; you have to fix it yourself. And while repairing a bookshelf is one thing, recovering failed applications in a data center is something else entirely.

Linux is an excellent tool for creating the IT environment you want. Its flexibility and open-source architecture mean you can use it to support nearly any need, running mission-critical systems effectively while keeping costs low. That flexibility, however, means that if something does go wrong, it’s up to you to ensure your business operations continue without disruption. And while many disaster recovery solutions focus on recovering data after an outage, leaving it at that is leaving the job half done. The information itself is useless if the applications that use it don’t function and you are unable to meet SLAs.

Businesses that value the independence Linux provides can benefit from partnering with a technology provider that can keep their business running in the event of disaster. And as we have seen all too frequently in the last several years, disasters happen to organizations of all sizes, from natural catastrophes to large-scale hacks that take down servers company-wide. It seems that every week the news brings another large company suffering a significant service failure.

As you consider what to look for in solutions to keep your Linux-heavy data center up and running, we recommend focusing on the following criteria:

* Speed of failure detection and recovery: Every minute counts when it comes to business downtime. The first step to effective recovery is rapid detection of failure; even the best recovery solution will fall short if detection itself takes minutes rather than seconds. The ideal tool should provide fast detection with minimal resource usage in order to meet recovery time objectives (a minimal detection sketch follows this list).

* Failover that covers the entire range of business services: Business-critical applications may require preserving several layers of the information stack at once: the web front end, the application itself, and the databases feeding it information. High availability is a challenge when recovery is this complex, so be sure your backup and recovery solutions can handle the interconnected processes necessary to maintain business operations.
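
To make the detection point concrete, here is a deliberately simple sketch of heartbeat-style failure detection: poll a health endpoint at a short interval and declare failure after a few consecutive misses. The endpoint URL, interval, and threshold are hypothetical; production HA tools implement the same idea with far more rigor.

```python
# Toy illustration of fast failure detection: poll a service's health endpoint
# at a short interval and flag a failure after N consecutive misses.
import time
import urllib.request

HEALTH_URL = "http://app.example.internal/health"  # hypothetical endpoint
INTERVAL_S = 1.0        # poll every second
MAX_MISSES = 3          # declare failure after roughly 3 seconds of silence

def is_healthy(url: str, timeout: float = 0.5) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:      # connection refused, timeout, DNS failure, etc.
        return False

def monitor() -> None:
    misses = 0
    while True:
        misses = 0 if is_healthy(HEALTH_URL) else misses + 1
        if misses >= MAX_MISSES:
            print("service down -- trigger failover")  # hand off to recovery logic
            return
        time.sleep(INTERVAL_S)
```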

 
Big data implementations that produce value drive lower costs and higher profits, and once that value exists, the business comes to depend on it: the implementation becomes mission-critical. Hence, at some point you must implement a disaster recovery plan. And since this requirement may arrive with little warning, the database administrator and other support staff should take proactive steps during the first big data implementation.

Review storage needs, network capacity, hardware capabilities and software license requirements at the beginning of your implementation. Have this data published and available to management before it becomes critical. This allows your enterprise to budget and plan for its needs in advance.

Both application designers and database administrators sometimes take the simplistic view that regular backups of application data are sufficient for any recovery need. A strategy of weekend-only backups can easily backfire. Backup methods that meet the application's and the enterprise's needs start with a sound recovery strategy, and that strategy must be applied from the beginning, starting with the design of the big data database and application.

Two factors drive which recovery options are used for a big data application:

* The recovery time objective (RTO): During a recovery scenario, how long can the application data (or portions of the data) be unavailable?
* The recovery point objective (RPO): During a recovery scenario, to what point must data be recovered? To a specific date/time? To the end of the most recently completed transaction?

For a big data implementation, the choice of recovery point is straightforward. The most common situation is a period of extract, transform, and load (ETL) of operational data from legacy systems into the big data store, followed by multiple analytical applications that query the data. The most commonly chosen recovery point is immediately after loading is complete.

Backup and recovery strategies are driven by this choice. For example, if the preferred method of backup is database image copies, these can be scheduled to begin at the time of the recovery point. These backups will not interfere with applications because analytics involves querying, not updating. Of course, the database administrator must ensure that all backups complete within a reasonable time; backups taking more than 24 hours will interfere with the next day’s processing.
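
A rough sketch of that scheduling logic follows, assuming a nightly ETL load. The functions run_etl and run_image_copy are placeholders for whatever your database tooling actually provides; only the 24-hour constraint comes from the discussion above.

```python
# Sketch of the "backup at the recovery point" idea: kick off an image-copy
# backup as soon as the nightly ETL load completes, and warn if the backup
# window threatens the next day's processing.
import time

BACKUP_WINDOW_H = 24  # backups must finish well inside a day

def run_etl() -> None: ...          # load operational data into the big data store
def run_image_copy() -> None: ...   # take database image copies (vendor-specific)

def nightly_cycle() -> None:
    run_etl()                       # recovery point = the moment the load finishes
    start = time.monotonic()
    run_image_copy()                # analytics only query, so this won't block them
    hours = (time.monotonic() - start) / 3600
    if hours >= BACKUP_WINDOW_H:
        print(f"backup took {hours:.1f} h -- it will collide with tomorrow's ETL")
```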

Recovery time requirements are also easily defined. The recovery process must make the data available for analytics within about 24 hours. Any longer, and the recovery site may not be able to catch up with the additional daily operational data that must now be loaded.

Database administrators should elicit basic recovery time and recovery point objectives for any big data implementation as early as possible. Then they should review backup and recovery options, choose methods and procedures that meet the objectives, and document the results for future reference. As applications mature and the enterprise big data store grows, the designation of your big data as mission-critical won’t catch you unprepared.


Another common objection to doing disaster recovery planning for big data is the sheer size of the data store. Infrastructure staff believe that such a huge volume of data will take forever to back up, forever to recover, and take up immense quantities of backup storage. Luckily, several recent technical advances in hardware can mitigate these worries.


Most modern disk storage equipment has an optional disk mirroring facility. Mirroring is a process in which changes on one disk drive are automatically made to a corresponding drive in another location. This lets the support staff implement disk copying and backup, data replication, and publish-subscribe applications using available hardware features, without the need to code applications.

For backup and recovery purposes, the storage administrator designates a set of disks or a disk array to be the mirror (or backup) of another. The primary disks can be those of the big data store, with an array of backup disks in a secure location used as the mirrors. When the primary disks are updated during the ETL process, the hardware automatically makes those updates on the mirrors. In case of a disaster, the storage administrator defines the mirror disks to the operating system as the primary ones. At that point, the data is restored and the application is available.
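
The toy class below is a software analogy of that behavior, not a model of any real storage controller: every write to the primary is duplicated on the mirror, and fail_over() promotes the mirror to primary. All names are invented for the illustration.

```python
# Simplified software analogy of hardware disk mirroring. Real mirroring
# happens inside the storage controller; this only illustrates the concept.
class MirroredStore:
    def __init__(self) -> None:
        self.primary: dict[str, bytes] = {}
        self.mirror: dict[str, bytes] = {}

    def write(self, block: str, data: bytes) -> None:
        self.primary[block] = data
        self.mirror[block] = data      # controller duplicates the update

    def fail_over(self) -> None:
        # Disaster: designate the mirror disks as the new primary.
        self.primary = self.mirror

store = MirroredStore()
store.write("blk-001", b"etl batch 2014-01-15")
store.fail_over()                      # data is immediately available again
assert store.primary["blk-001"] == b"etl batch 2014-01-15"
```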
 
Nine in ten companies expect the cost of backup and recovery to shoot up over the next five years, according to recently published research.

The study, carried out by cloud backup vendor Asigra, said the figure pointed to a willingness to switch to what it calls “recovery-based pricing” – where users pay for what is recovered, when it is recovered. Fear of cost growth, it said, is a strong motivator to make the switch.

Savings to archive

Asigra said recovery-based pricing is based on a low, limited recovery cost so that backup technology expenditures remain affordable even as data volumes rise. It claimed companies adopting such pricing models would see immediate savings of 40 percent and long-term savings of up to 70 percent.
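
To put those percentages in concrete terms, here is the arithmetic applied to a hypothetical $100,000 annual backup budget. The baseline figure is invented for the example; only the 40 and 70 percent rates come from Asigra's claim.

```python
# Illustrative arithmetic only: applying the savings percentages quoted by
# Asigra to a hypothetical annual backup budget.
baseline = 100_000                      # hypothetical annual spend, capacity-based pricing
immediate = baseline * (1 - 0.40)       # claimed immediate savings of 40%
long_term = baseline * (1 - 0.70)       # claimed long-term savings of up to 70%
print(f"year one:  ${immediate:,.0f}")  # year one:  $60,000
print(f"long term: ${long_term:,.0f}")  # long term: $30,000
```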

The survey of 161 CIOs, IT executives and IT staff found that 84 percent of all respondents would be either “Likely” or “Very Likely” to switch to recovery-based pricing immediately if it offered the same technical capabilities as their current software. This was also made more likely because 86 percent said they thought costs would increase over the next five years, compared with just 12 percent who thought costs would decrease.

The company also found that organizations with high data growth rates were more likely to switch to recovery-based pricing. Similarly, and as might be expected, respondents with larger data backup volumes were more likely to switch than those with smaller volumes, making large enterprises the most interested in recovery-based pricing.

Eran Farajun, executive vice president for Asigra, said pricing alternatives for technologies have evolved significantly as cloud computing has entered the spotlight.

“The need for change in the pricing of software and software-based services has finally come to a point where users are more educated on the topic and demanding that pricing be aligned with business value,” he said.

 
With the stock currently fetching 11 times his 2015 EPS estimate of 69 cents, which is unchanged today, LSI has “one of the lowest valuations in our coverage group on cross cycle estimates,” writes Moore.

Within the traditional hard disk drive controller chip business, where LSI has only one-third of the market versus two-thirds for competitor Marvell Technology Group (MRVL), Moore thinks LSI has a chance to take share in the coming 12 months:

LSI is coming off a difficult year in 2013, as Marvell took away most of the Seagate notebook business, and has been gaining share within Seagate enterprise. The offset to that, the LSI wins with a new customer (we believe Western Digital in the 1 TB desktop platform), have been slow to materialize. But with management saying on the earnings call that a new customer would ship “10s of millions of units” next year (at a $3.50 or so ASP, we believe), there is likely growth potential in HDD next year. We would anticipate that this WDC share remains controversial until LSI actually shows up in teardowns, as Marvell’s execution in hard drive SoCs has been very strong over the years. But we do think that the improved profitability of the hard drive industry warrants the investment required to maintain a second source. We assume 25 mm incremental units next year at $3.50, for about 3% incremental revenue growth. We have also seen Seagate’s enterprise business oscillate between LSI and Marvell in recent years, with Marvell taking significant share in 2013. We think that Seagate plans around this oscillation to essentially have dual custom development teams at their disposal. At the 28 nm node, we do think that LSI takes some of that share back over the course of next year. Overall, we project 5% growth in LSI’s HDD business in 2014, healthy for a business that has served as the primary growth overhang on the stock.
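
For readers following the math, the note's unit assumption works out as below. The implied revenue base is back-solved from the “about 3%” figure and is an inference for illustration, not a number Moore states.

```python
# Checking the note's arithmetic: 25 million incremental units at a $3.50 ASP.
units = 25_000_000
asp = 3.50
incremental = units * asp               # $87.5 million of new revenue
implied_base = incremental / 0.03       # ~$2.9 billion, back-solved from "about 3%"
print(f"incremental revenue: ${incremental / 1e6:.1f}M")
print(f"implied revenue base: ${implied_base / 1e9:.2f}B")
```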

As far as solid-state drives, or SSDs, LSI had a tough year as personal computer use of the drives remained muted because of stable pricing of flash memory chips, which held back broader adoption, he writes. And flash suppliers such as Toshiba cozied up to Apple (AAPL) and Marvell, while the third-party flash plug-in card market where LSI sells controllers came under pressure. But Moore sees brighter times for LSI in 2014 serving a bunch of companies looking for merchant silicon to act as drive controllers:

Samsung (which makes its own controllers) gained substantial share, shrinking the merchant market; we estimate that Samsung has just over 30% share. LSI does participate with Samsung on SATA solutions, but it’s a minority of their business. But Micron, Intel, Sandisk, and Toshiba all rely on 3rd party SSD controllers (some of them SoCs from Marvell or LSI which use some of their own IP). Hynix has tried to use Linkamedia to build SSD controllers but has struggled to drive any traction in the market. The ASIC/semi-custom approach is likely to become more the norm within this space.

 
Western Digital hard disks are available for a range of desktop and portable devices, shipping with SATA, SCSI, and IDE interfaces. The hard drive families include MAMMOTH, SABRE, HAWK, SCORPIO, and BUCCANEER. At times, a system with a Western Digital disk installed may fail to detect it.

The BIOS no longer recognizes the drive, you encounter failure error messages, you hear telltale scratching or grinding noises from the disk, or the drive appears to be completely dead. In such situations, you should immediately replace the disk, as it might be physically damaged, and consult hard disk recovery technicians to extract the lost data.

For example, when you try to start a computer with a Western Digital disk installed, the boot may fail with the following error:

"WDC ROM MODEL--"

Here, ' disk family name' denotes the name of Western Digital hard drive family.

Such errors are generally indicative of physical disk failure. The error appears on the BIOS screen and is generated by the PCB controller board when it cannot detect a healthy drive. A few possible causes for this behavior are:

1. Hard disk ROM is corrupted
2. Corruption of one or more firmware modules
3. One or more read/write heads are faulty

You need to follow these steps in order to recover from the failure:

1. Power down the system as soon as possible.
2. Detach the drive from the system.
3. Install a new disk.
4. Pack the failed disk in anti-static and anti-shock material and send it to a reliable company providing data recovery services.